
Pierrotlc's group workspace

ADAIN - 64x64

Tags: helpful-sea-230

State: Failed
Start time: February 2nd, 2022, 2:41:19 PM
Runtime: 5m 35s
Tracked hours: 5m 32s
Run path: pierrotlc/AnimeStyleGAN/1srzi562
OS: Linux-5.15.15-76051515-generic-x86_64-with-glibc2.10
Python version: 3.8.5
Git repository: git clone git@github.com:Futurne/AnimeStyleGAN.git
Git state: git checkout -b "helpful-sea-230" cd96afac9be52fcf0ab96cd97d8c2233f7f68545
Command: launch_training.py
System hardware:
  CPU count: 16
  GPU count: 1
  GPU type: NVIDIA GeForce RTX 3080 Laptop GPU
W&B CLI version: 0.12.9
Config

Config parameters are your model's inputs.

  • {} 37 keys
    • 256
    • [] 2 items
      • 0.5
      • 0.99
    • [] 2 items
      • 0.5
      • 0.5
    • "<torch.utils.data.dataloader.DataLoader object at 0x7fe0758590d0>"
    • "cuda"
    • 64
    • 196
    • 0.3
    • 100
    • 0.1
    • 0.1
    • 0.0001
    • 0.0001
    • [] 1 item
      • 25
    • [] 1 item
      • 25
    • 512
    • 12
    • 2
    • 10
    • 3
    • 5
    • 4
    • 10
    • "Discriminator( (first_conv): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(3, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (blocks): ModuleList( (0): DiscriminatorBlock( (convs): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(12, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(12, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(12, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (3): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(12, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (4): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(12, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) ) (downsample): Conv2d(12, 24, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) ) (1): DiscriminatorBlock( (convs): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(24, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(24, 24, kernel_size=(3, 3), 
stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(24, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (3): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(24, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (4): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(24, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) ) (downsample): Conv2d(24, 48, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) ) (2): DiscriminatorBlock( (convs): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(48, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(48, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(48, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (3): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(48, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(48, 
eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (4): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(48, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) ) (downsample): Conv2d(48, 96, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) ) (3): DiscriminatorBlock( (convs): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (3): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (4): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) ) (downsample): Conv2d(96, 192, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) ) (4): DiscriminatorBlock( (convs): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(192, 192, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (3): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (4): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) ) (downsample): Conv2d(192, 384, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) ) (5): DiscriminatorBlock( (convs): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), 
padding=(1, 1), bias=False) (2): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (3): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (4): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) ) (downsample): Conv2d(384, 768, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) ) ) (classify): Sequential( (0): Conv2d(768, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): Flatten(start_dim=1, end_dim=-1) ) )"
    • "StyleGAN( (mapping): MappingNetwork( (norm): LayerNorm((196,), eps=1e-05, elementwise_affine=True) (layers): ModuleList( (0): Sequential( (0): Linear(in_features=196, out_features=196, bias=True) (1): LayerNorm((196,), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Linear(in_features=196, out_features=196, bias=True) (1): LayerNorm((196,), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Linear(in_features=196, out_features=196, bias=True) (1): LayerNorm((196,), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) (3): Sequential( (0): Linear(in_features=196, out_features=196, bias=True) (1): LayerNorm((196,), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) ) (out): Linear(in_features=196, out_features=196, bias=True) ) (synthesis): SynthesisNetwork( (blocks): ModuleList( (0): SynthesisBlock( (layers): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) ) (ada_in): AdaIN() (A1): Linear(in_features=196, out_features=1024, bias=True) (A2): Linear(in_features=196, out_features=1024, bias=True) (B1): Conv2d(10, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (B2): Conv2d(10, 512, kernel_size=(3, 3), stride=(1, 1), 
padding=(1, 1)) ) (1): SynthesisBlock( (upsample): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) (conv): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (layers): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) ) (ada_in): AdaIN() (A1): Linear(in_features=196, out_features=512, bias=True) (A2): Linear(in_features=196, out_features=512, bias=True) (B1): Conv2d(10, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (B2): Conv2d(10, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (2): SynthesisBlock( (upsample): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) (conv): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (layers): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(128, eps=1e-05, momentum=0.1, 
affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) ) (ada_in): AdaIN() (A1): Linear(in_features=196, out_features=256, bias=True) (A2): Linear(in_features=196, out_features=256, bias=True) (B1): Conv2d(10, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (B2): Conv2d(10, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (3): SynthesisBlock( (upsample): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) (conv): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (layers): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) ) (ada_in): AdaIN() 
(A1): Linear(in_features=196, out_features=128, bias=True) (A2): Linear(in_features=196, out_features=128, bias=True) (B1): Conv2d(10, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (B2): Conv2d(10, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (4): SynthesisBlock( (upsample): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) (conv): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (layers): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) ) (ada_in): AdaIN() (A1): Linear(in_features=196, out_features=64, bias=True) (A2): Linear(in_features=196, out_features=64, bias=True) (B1): Conv2d(10, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (B2): Conv2d(10, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (5): SynthesisBlock( (upsample): ConvTranspose2d(32, 16, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) (conv): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): 
LeakyReLU(negative_slope=0.01) ) (layers): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): LeakyReLU(negative_slope=0.01) ) ) (ada_in): AdaIN() (A1): Linear(in_features=196, out_features=32, bias=True) (A2): Linear(in_features=196, out_features=32, bias=True) (B1): Conv2d(10, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (B2): Conv2d(10, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) ) (to_rgb): Conv2d(16, 3, kernel_size=(1, 1), stride=(1, 1)) ) )"
    • "Adam ( Parameter Group 0 amsgrad: False betas: (0.5, 0.99) eps: 1e-08 initial_lr: 0.0001 lr: 0.0001 weight_decay: 0 )"
    • "Adam ( Parameter Group 0 amsgrad: False betas: (0.5, 0.5) eps: 1e-08 initial_lr: 0.0001 lr: 0.0001 weight_decay: 0 )"
    • 0.9
    • 0.9
    • 0
    • "<torch.optim.lr_scheduler.MultiStepLR object at 0x7fe086c1e550>"
    • "<torch.optim.lr_scheduler.MultiStepLR object at 0x7fe07582fd90>"
    • 0.5
    • 0.5
    • 0
    • 0
    • 1
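The logged discriminator repr above shows a repeating pattern: each `DiscriminatorBlock` stacks five identical Dropout → Conv2d → BatchNorm2d → LeakyReLU stages, then halves the resolution while doubling the channels (12 → 24 → 48 → 96 → 192 → 384 → 768) with a strided 4×4 convolution. A minimal sketch of one such block, reconstructed from the repr; the forward wiring is an assumption, since the repr only shows module structure:

```python
import torch
import torch.nn as nn

class DiscriminatorBlock(nn.Module):
    """One resolution stage, matching the logged repr: n identical conv
    stages followed by a strided-conv downsample that doubles channels."""

    def __init__(self, channels: int, n_layers: int = 5, dropout: float = 0.3):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Sequential(
                nn.Dropout(p=dropout),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
                nn.LeakyReLU(),
            )
            for _ in range(n_layers)
        ])
        # 4x4 conv, stride 2: halves H and W, doubles the channel count.
        self.downsample = nn.Conv2d(channels, 2 * channels, kernel_size=4, stride=2, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assumption: the conv stages are applied sequentially; the repr
        # does not reveal the actual forward logic (e.g. residual sums).
        for conv in self.convs:
            x = conv(x)
        return self.downsample(x)
```

For a 64×64 input at 12 channels (the first block in the repr), the output is 24 channels at 32×32.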
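The run's title refers to the `AdaIN()` modules inside each `SynthesisBlock`. The paired `A1`/`A2` linear layers map the 196-dimensional style vector to per-channel statistics (note `out_features` is always 2 × the block's channel count, e.g. 1024 = 2 × 512), which AdaIN applies to the feature maps. A sketch of standard adaptive instance normalization; how this repo splits scale and bias between `A1` and `A2` is not visible in the repr:

```python
import torch

def adain(content: torch.Tensor,
          style_scale: torch.Tensor,
          style_bias: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """Adaptive Instance Normalization.

    Normalizes each (sample, channel) feature map of `content` to zero mean
    and unit std, then rescales with style-derived statistics.
    Shapes: content (N, C, H, W); style_scale, style_bias (N, C, 1, 1).
    """
    mean = content.mean(dim=(2, 3), keepdim=True)
    std = content.std(dim=(2, 3), keepdim=True) + eps  # eps avoids division by zero
    return style_scale * (content - mean) / std + style_bias
```

With `style_scale = 1` and `style_bias = 0` this reduces to plain instance normalization.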
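The two logged Adam reprs and the `MultiStepLR` scheduler objects can be recreated directly from the values shown: both optimizers use lr = 1e-4, with betas (0.5, 0.99) and (0.5, 0.5) respectively, and the config lists milestone [25] with gamma 0.5. A sketch with stand-in parameters; which optimizer drives the discriminator versus the generator is an assumption:

```python
import torch
from torch import nn, optim

# Stand-in modules; the real run optimizes the Discriminator and StyleGAN logged above.
netD = nn.Linear(4, 1)
netG = nn.Linear(4, 4)

# Values read off the logged optimizer reprs and config:
# lr=1e-4 for both, betas=(0.5, 0.99) vs (0.5, 0.5),
# MultiStepLR halving the lr at epoch 25.
optimD = optim.Adam(netD.parameters(), lr=1e-4, betas=(0.5, 0.99))
optimG = optim.Adam(netG.parameters(), lr=1e-4, betas=(0.5, 0.5))
schedD = optim.lr_scheduler.MultiStepLR(optimD, milestones=[25], gamma=0.5)
schedG = optim.lr_scheduler.MultiStepLR(optimG, milestones=[25], gamma=0.5)
```

After 25 scheduler steps, both learning rates drop from 1e-4 to 5e-5 and stay there (a single milestone means a single decay).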
Summary

Summary metrics are your model's outputs.

  • {} 10 keys
    • 0.7036973834037781
    • 0.7045273959636689
    • 0.705357426404953
    • 0.4946015954017639
    • 0.7300327003002167
    • 0.7300327599048615
    • {} 7 keys
      • 0.49479223489761354
      • 0.00000000371281965172
      • 0.0000000879530642095
Artifact Outputs

This run produced these artifacts as outputs. Total: 1.
